The statistical heterogeneity of non-independent and identically distributed (non-IID) data across local clients significantly limits the performance of federated learning. Previous attempts such as FedProx, SCAFFOLD, MOON, FedNova and FedDyn take an optimization perspective, adding an auxiliary term or re-weighting local updates to calibrate the learning bias or the objective inconsistency. However, beyond these improvements to federated averaging, our analysis shows that another critical bottleneck is the poorer optima of client models under more heterogeneous conditions. We thus introduce a data-driven approach called FedSkip to improve the client optima by periodically skipping federated averaging and scattering local models across devices. We provide a theoretical analysis of the possible benefits of FedSkip and conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency and comparable communication efficiency. Source code is available at: https://github.com/MediaBrain-SJTU/FedSkip.
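The scheduling idea can be illustrated in a few lines. Below is a minimal toy sketch, not the authors' implementation: clients run local full-batch gradient steps each round; every `skip_period` rounds the server performs federated averaging, and in the skipped rounds it scatters the local models across clients instead (here implemented as a random permutation, which is an assumption of this sketch). All function names are illustrative.

```python
import numpy as np

def local_train(w, X, y, lr=0.1, epochs=5):
    """A few full-batch gradient steps of least-squares on one client's data."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedskip_round(client_models, data, rnd, skip_period=4, rng=None):
    """One communication round: average only every `skip_period` rounds;
    otherwise scatter (here: randomly permute) local models across clients."""
    rng = rng or np.random.default_rng(0)
    models = [local_train(w, X, y) for w, (X, y) in zip(client_models, data)]
    if rnd % skip_period == 0:
        avg = np.mean(models, axis=0)          # federated averaging round
        return [avg.copy() for _ in models]
    perm = rng.permutation(len(models))        # skipped round: scatter models
    return [models[i] for i in perm]
```

Scattering lets each model see several clients' (non-IID) data between averaging steps, which is the mechanism the abstract credits for improving client optima.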
The mainstream workflow of image recognition applications is first training one global model on the cloud for a wide range of classes and then serving numerous clients, each with heterogeneous images from a small subset of classes to be recognized. Given the cloud-client discrepancies in the range of image classes, the recognition model is desired to have strong adaptiveness, intuitively by concentrating its focus on each individual client's local dynamic class subset, while incurring negligible overhead. In this work, we propose to plug a new intra-client and inter-image attention (ICIIA) module into existing backbone recognition models, requiring only one-time cloud-based training to become client-adaptive. In particular, given a target image from a certain client, ICIIA introduces multi-head self-attention to retrieve relevant images from the client's historical unlabeled images, thereby calibrating the focus and the recognition result. Further considering that ICIIA's overhead is dominated by linear projection, we propose partitioned linear projection with feature shuffling as a replacement and allow increasing the number of partitions to dramatically improve efficiency without sacrificing too much accuracy. We finally evaluate ICIIA using 3 different recognition tasks with 9 backbone models over 5 representative datasets. Extensive evaluation results demonstrate the effectiveness and efficiency of ICIIA. Specifically, for ImageNet-1K with the backbone models of MobileNetV3-L and Swin-B, ICIIA can improve the testing accuracy to 83.37% (+8.11%) and 88.86% (+5.28%), while adding only 1.62% and 0.02% of FLOPs, respectively.
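The efficiency trick can be sketched directly from the description: the d-dimensional feature is split into P partitions, each projected by its own small (d/P) x (d/P) matrix, giving P-fold fewer parameters and FLOPs than a full d x d projection, and a channel-shuffle-style interleaving lets stacked layers still mix all dimensions. A minimal numpy sketch; function names are mine, not the paper's.

```python
import numpy as np

def partitioned_projection(x, weights, num_partitions):
    """Block-diagonal linear projection: project each feature partition with
    its own small matrix, cutting params/FLOPs by `num_partitions`x versus a
    full d x d projection."""
    chunks = np.split(x, num_partitions, axis=-1)
    return np.concatenate([c @ W for c, W in zip(chunks, weights)], axis=-1)

def feature_shuffle(x, num_partitions):
    """Interleave features across partitions (cf. channel shuffle) so that
    stacked partitioned projections can still mix all dimensions."""
    d = x.shape[-1]
    return x.reshape(*x.shape[:-1], num_partitions, d // num_partitions) \
            .swapaxes(-1, -2).reshape(*x.shape)
```

For d = 8 and P = 2 the full projection needs 64 weights while the partitioned one needs 2 x 16 = 32, matching the claimed linear reduction with the partition count.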
Pretrained language models have demonstrated extraordinary capabilities in language generation. However, real-world tasks often require controlling the distribution of generated text in order to mitigate bias, promote fairness, and achieve personalization. Existing techniques for controlling the distribution of generated text only work with quantified distributions, which require pre-defined categories, proportions of the distribution, or an existing corpus following the desired distributions. However, many important distributions, such as personal preferences, are unquantified. In this work, we tackle the problem of generating text following arbitrary distributions (quantified and unquantified) by proposing Nano, a few-shot human-in-the-loop training algorithm that continuously learns from human feedback. Nano achieves state-of-the-art results on single topic/attribute as well as quantified distribution control compared to previous works. We also show that Nano is able to learn unquantified distributions, achieves personalization, and captures differences between different individuals' personal preferences with high sample efficiency.
Continual learning (CL) sequentially learns new tasks like humans do, with the goals of better stability (S, remembering past tasks) and plasticity (P, adapting to new tasks). Since past training data are unavailable, it is valuable to explore how training examples differ in their influence on S and P, which may improve the learning pattern toward better SP. Inspired by influence functions (IF), we first study example influence by adding perturbations to example weights and computing the influence derivation. To avoid the storage and computation burden of the Hessian inverse in neural networks, we propose a simple yet effective MetaSP algorithm that simulates the two key steps of IF computation and obtains S- and P-aware example influence. Moreover, we propose to fuse the two kinds of example influence by solving a dual-objective optimization problem, obtaining a fused influence toward SP Pareto optimality. The fused influence can be used to control model updates and to optimize the storage of rehearsal. Empirical results show that our algorithm significantly outperforms state-of-the-art methods on both task- and class-incremental benchmark CL datasets.
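MetaSP itself is not reproduced here, but the Hessian-free motivation can be illustrated with the standard first-order influence approximation: score each training example by how well its per-example gradient aligns with the validation-set gradient, skipping the Hessian inverse entirely. A toy logistic-regression sketch; the setup and names are mine.

```python
import numpy as np

def grad_logistic(w, X, y):
    """Gradient of the mean logistic loss; labels y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

def example_influence(w, X_train, y_train, X_val, y_val):
    """First-order influence proxy (no Hessian inverse): a gradient step on
    example i moves along -g_i, so the score g_val . g_i is positive for
    examples whose gradient aligns with the validation gradient, i.e. whose
    up-weighting helps the validation objective."""
    g_val = grad_logistic(w, X_val, y_val)
    scores = []
    for xi, yi in zip(X_train, y_train):
        g_i = grad_logistic(w, xi[None, :], np.array([yi]))
        scores.append(float(g_val @ g_i))
    return np.array(scores)
```

In a rehearsal buffer, such scores can rank examples (e.g. a mislabeled example gets a strongly negative score), which is the kind of storage-optimization signal the abstract describes.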
IceCube, a cubic-kilometer array of optical sensors for detecting atmospheric and astrophysical neutrinos with energies between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analyses. Reconstructing and classifying events is challenging due to the detector's geometry, the inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively small number of signal photons produced per event. To address this challenge, IceCube events can be represented as point cloud graphs, with a graph neural network (GNN) as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range with the state-of-the-art maximum likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false positive rate (FPR), compared to the current IceCube method. Alternatively, the GNN reduces the FPR by over a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves on average by 13%-20% over the current maximum likelihood techniques. When running on a GPU, the GNN is able to process IceCube events at a rate nearly matching the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
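The abstract does not give the network architecture; the sketch below only illustrates the general recipe it describes, under assumptions of mine: sensor hits become nodes of a k-nearest-neighbour graph, a few mean-aggregation message-passing layers update per-hit features, and a pooled readout yields an event-level probability.

```python
import numpy as np

def knn_graph(points, k=3):
    """Adjacency matrix of a k-nearest-neighbour graph over sensor hits."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # no self-edges
    A = np.zeros((len(points), len(points)))
    for i, js in enumerate(np.argsort(d, axis=1)[:, :k]):
        A[i, js] = 1.0
    return A

def gnn_layer(H, A, W):
    """One message-passing layer: mean-aggregate neighbour (and own)
    features, project, ReLU."""
    A_hat = A + np.eye(len(A))                # include self-loop
    H_agg = A_hat @ H / A_hat.sum(1, keepdims=True)
    return np.maximum(H_agg @ W, 0.0)

def classify_event(points, feats, Ws, w_out):
    """Graph-level readout: mean-pool node features, then a sigmoid head
    (e.g. P(neutrino) vs. cosmic-ray background)."""
    A = knn_graph(points)
    H = feats
    for W in Ws:
        H = gnn_layer(H, A, W)
    pooled = H.mean(axis=0)
    return 1.0 / (1.0 + np.exp(-(pooled @ w_out)))
```

The same node embeddings can feed regression heads for energy, direction and vertex, as the abstract lists; only the binary head is shown here.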
This paper studies the problem of controlling a multi-robot system to achieve a polygon formation in a self-organized manner. Different from typical formation control strategies, where robots are steered to satisfy predefined control variables such as pairwise distances, relative positions, and bearings, the foremost idea of this paper is to apply control inputs only to a few robots (say, the vertex robots of the group), while the remaining robots follow the simple principle of moving toward the midpoint of their two nearest neighbors in the cycle graph, without any external input. In our problem, the robots are initially distributed on the plane. The so-called vertex robots are responsible for determining the geometry of the entire formation as well as its overall size, while the others move so as to minimize the discrepancy with their two immediate neighbors. In the first step, each vertex robot estimates the number of robots in its associated chain. Two types of control inputs for the estimation are designed, using measurements from the latest time instant and from the last two time instants, respectively. In the second step, a self-organized formation control law is proposed in which only the vertex robots receive external information. The two estimation strategies are compared in terms of convergence speed and robustness. The effectiveness of the whole control framework is further validated in both simulations and physical experiments.
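The followers' rule is simple enough to simulate directly. A minimal sketch (vertex robots are simply held fixed here; the chain-length estimation and the paper's vertex control law are omitted): each non-vertex robot moves toward the midpoint of its two cycle-graph neighbours, which drives a chain of followers to evenly spaced points between its two vertex endpoints.

```python
import numpy as np

def midpoint_step(pos, vertex_idx, alpha=0.5):
    """One update of the follower rule on a cycle graph: each non-vertex
    robot moves a fraction alpha toward the midpoint of its two neighbours;
    vertex robots hold position (their external input is omitted)."""
    n = len(pos)
    new = pos.copy()
    for i in range(n):
        if i in vertex_idx:
            continue
        mid = 0.5 * (pos[(i - 1) % n] + pos[(i + 1) % n])
        new[i] = pos[i] + alpha * (mid - pos[i])
    return new
```

With the endpoints fixed, the follower dynamics on each chain is a discrete averaging (heat-equation-like) iteration, so the positions converge geometrically to the evenly spaced equilibrium, consistent with the self-organized polygon formation described above.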
Existing class-incremental lifelong learning studies only use single-labeled data, which limits their adaptability to multi-label data. This paper studies lifelong multi-label (LML) classification, which builds an online class-incremental classifier on a sequential multi-label classification data stream. Training with partially labeled data in LML classification may lead to more serious catastrophic forgetting of old classes. To solve the problem, the study proposes an Augmented Graph Convolutional Network (AGCN) with a constructed Augmented Correlation Matrix (ACM) across the sequential partial-label tasks. The results on two benchmarks show that the method is effective for classification and for mitigating forgetting.
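The AGCN/ACM construction is only named in the abstract; as a hedged illustration of the underlying idea, the sketch below builds a soft label-correlation matrix from multi-label targets and propagates label embeddings over it with one graph-convolution layer. The augmentation of the matrix across sequential tasks is not reproduced, and the function names are mine.

```python
import numpy as np

def correlation_matrix(Y):
    """Soft label co-occurrence: entry (i, j) approximates P(label j | label i),
    estimated from multi-label targets Y (n_samples x n_labels)."""
    co = Y.T @ Y
    counts = np.diag(co).clip(min=1.0)
    return co / counts[:, None]

def gcn_layer(E, A, W):
    """Propagate label embeddings E over the correlation graph A, so that a
    label's representation absorbs information from its co-occurring labels."""
    return np.maximum(A @ E @ W, 0.0)
```

In an LML setting, such a matrix can be grown block-wise as new classes arrive, letting statistics of old labels constrain the new classifier.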
Many real-world problems are inherently multimodal, from the communicative modalities humans use to express social and emotional states to the force, proprioception, and visual sensors ubiquitous on robots. While there has been an explosion of interest in multimodal representation learning, these methods are still largely focused on a small set of modalities, primarily in the language, vision, and audio space. In order to accelerate generalization towards diverse and understudied modalities, this paper studies efficient representation learning for high-modality scenarios. Since adding new models for every new modality or task becomes prohibitively expensive, a critical technical challenge is heterogeneity quantification: how can we measure which modalities encode similar information and interactions in order to permit parameter sharing with previous modalities? We propose two new information-theoretic metrics for heterogeneity quantification: (1) modality heterogeneity studies how similar 2 modalities $\{X_1,X_2\}$ are by measuring how much information can be transferred from $X_1$ to $X_2$, while (2) interaction heterogeneity studies how similarly pairs of modalities $\{X_1,X_2\}, \{X_3,X_4\}$ interact by measuring how much interaction information can be transferred from $\{X_1,X_2\}$ to $\{X_3,X_4\}$. We show the importance of these proposed metrics in high-modality scenarios as a way to automatically prioritize the fusion of modalities that contain unique information or interactions. The result is a single model, HighMMT, that scales up to $10$ modalities and $15$ tasks from $5$ different research areas. Not only does HighMMT outperform prior methods on the tradeoff between performance and efficiency, it also demonstrates a crucial scaling behavior: performance continues to improve with each modality added, and transfers to entirely new modalities and tasks during fine-tuning.
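The abstract does not define how the heterogeneity metrics translate into parameter sharing; the sketch below assumes a measured pairwise heterogeneity matrix H and illustrates one plausible use: greedily group modalities whose heterogeneity falls below a threshold, so that each group can share parameters. The threshold and the grouping scheme are my assumptions, not HighMMT's actual procedure.

```python
import numpy as np

def share_groups(H, threshold):
    """Greedy grouping from a symmetric heterogeneity matrix H: modalities
    i and j end up in the same group (and could share parameters) whenever
    H[i, j] < threshold, merging transitively."""
    n = len(H)
    group = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if H[i, j] < threshold:
                gj, gi = group[j], group[i]
                group = [gi if g == gj else g for g in group]
    return group
```

Raising the threshold collapses more modalities into shared parameter sets, trading accuracy for efficiency in the same spirit as the performance/efficiency tradeoff reported above.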
We introduce a novel backbone architecture to improve the target-perception ability of feature representations for tracking. Specifically, it has been observed that de facto frameworks simply use the outputs of the backbone network for feature matching and subsequent target localization, with no direct feedback from the matching module to the backbone network, especially its shallow layers. More concretely, only the matching module can directly access the target information (in the reference frame), while the representation learning of the candidate frame is blind to the reference target. As a consequence, the accumulated effect of target-irrelevant interference in the shallow stages may degrade the feature quality of the deeper layers. In this paper, we approach the problem from a different angle by conducting multiple branch-wise interactions inside the Siamese-like backbone network (InBN). At the core of InBN is a general interaction modeler (GIM) that injects prior knowledge of the reference image into different stages of the backbone network, leading to better target-perception and robust distractor-resistance of the candidate feature representation, at negligible computational cost. The proposed GIM module and InBN mechanism are general and applicable to different backbone types, including CNN and Transformer, as demonstrated by our extensive experiments on multiple benchmarks. In particular, the CNN version (based on SiamCAR) improves the SUC score by absolute gains of 3.2/6.9 on LaSOT/TNL2K, respectively. The Transformer version obtains SUC scores of 65.7/52.0 on LaSOT/TNL2K, on par with recent state-of-the-art trackers. Code and models will be released.
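GIM's exact design is not specified in the abstract; as a hedged sketch, one natural instantiation of "injecting reference knowledge into a backbone stage" is a residual cross-attention from candidate features to reference features, shown below with illustrative names and shapes.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gim_stage(cand, ref, Wq, Wk, Wv):
    """Inject reference-target cues into candidate features at one backbone
    stage: candidate tokens query the reference tokens via cross-attention,
    and the retrieved values are added residually."""
    Q, K, V = cand @ Wq, ref @ Wk, ref @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return cand + attn @ V
```

Applied at several backbone stages, each candidate token is nudged toward reference-consistent features early, which is the shallow-layer feedback the abstract argues is missing from de facto frameworks.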
Pancreatic cancer is one of the most malignant cancers in the world; it deteriorates rapidly and has a very high mortality rate. Rapid on-site evaluation (ROSE) technology innovates this workflow by immediately analyzing the fast-stained cytopathological images with on-site pathologists, enabling faster diagnosis in this time-pressured process. However, the broader expansion of ROSE diagnosis has been hindered by the lack of experienced pathologists. To overcome this problem, we propose a hybrid high-performance deep learning model to enable the automated workflow, thereby freeing the precious time of pathologists. By introducing the Transformer block into this field with our specific multi-stage hybrid design, the spatial features generated by a convolutional neural network (CNN) significantly enhance the Transformer's global modeling. Turning the multi-stage spatial features into global attention guidance, this design combines the robustness of the CNN's inductive bias with the sophisticated global modeling power of the Transformer. A dataset of 4240 ROSE images was collected to evaluate the method in this unexplored field. The proposed multi-stage hybrid Transformer (MSHT) achieves 95.68% classification accuracy, which is distinctly higher than that of state-of-the-art models. Facing the need for interpretability, MSHT outlines its regions of attention more accurately than its counterparts. The results demonstrate that MSHT can distinguish cancer samples accurately at an unprecedented image scale, laying a foundation for deploying automatic decision systems and expanding ROSE in clinical practice. Code and records are available at: https://github.com/sagizty/multi-stage-hybrid-transformer.